
    Generating Executable Action Plans with Environmentally-Aware Language Models

    Large Language Models (LLMs) trained on massive text datasets have recently shown promise in generating action plans for robotic agents from high-level text queries. However, these models typically do not consider the robot's environment, resulting in generated plans that may not actually be executable due to ambiguities in the planned actions or environmental constraints. In this paper, we propose an approach to generate environmentally-aware action plans that agents are better able to execute. Our approach integrates environmental objects and object relations as additional inputs into LLM action plan generation, providing the system with an awareness of its surroundings and producing plans in which each generated action is mapped to objects present in the scene. We also design a novel scoring function that, along with generating the action steps and associating them with objects, helps the system disambiguate among object instances and take their states into account. We evaluated our approach using the VirtualHome simulator and the ActivityPrograms knowledge base and found that action plans generated by our system had a 310% improvement in executability and a 147% improvement in correctness over prior work. The complete code and a demo of our method are publicly available at https://github.com/hri-ironlab/scene_aware_language_planner
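    The abstract describes combining an LLM's own likelihood for an action step with a grounding score over objects actually present in the scene, including their states. The sketch below illustrates that idea only; all names, the linear weighting, and the state rules are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of scene-aware action scoring: combine an LLM score for a
# candidate step with a check that the step's object exists in the scene and is
# in a compatible state. Weights and state rules are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    state: str  # e.g. "open", "closed", "on", "off"

def state_score(action: str, obj: SceneObject) -> float:
    """Penalize actions inconsistent with an object's state (e.g. opening an open fridge)."""
    if action == "open" and obj.state == "open":
        return 0.0
    return 1.0

def score_step(llm_logprob: float, action: str, action_obj: str,
               scene: list[SceneObject], alpha: float = 0.5) -> float:
    """Blend LLM likelihood with environment grounding (alpha is an assumed weight)."""
    matches = [o for o in scene if o.name == action_obj]
    # Grounding is 0 when the object is absent, else the best state-compatibility score.
    grounding = max((state_score(action, o) for o in matches), default=0.0)
    return alpha * llm_logprob + (1 - alpha) * grounding

scene = [SceneObject("fridge", "closed"), SceneObject("apple", "whole")]
print(score_step(-0.2, "open", "fridge", scene))    # grounded: fridge exists, is closed
print(score_step(-0.2, "open", "microwave", scene)) # ungrounded: no microwave in scene
```

    In this toy version, the grounded step scores higher than the ungrounded one, which is the behavior the paper's scoring function is designed to enforce.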

    Virtual-to-Real-World Transfer Learning for Robots on Wilderness Trails

    Robots hold promise in many outdoor scenarios, such as search-and-rescue, wildlife management, and collecting data to improve environment, climate, and weather forecasting. However, autonomous navigation of outdoor trails remains a challenging problem. Recent work has sought to address this issue using deep learning. Although this approach has achieved state-of-the-art results, the deep learning paradigm may be limited by its reliance on large amounts of annotated training data. Collecting and curating training datasets may not be feasible or practical in many situations, especially as trail conditions may change due to seasonal weather variations, storms, and natural erosion. In this paper, we explore an approach to address this issue through virtual-to-real-world transfer learning, using a variety of deep learning models trained to classify the direction of a trail in an image. Our approach utilizes synthetic data gathered from virtual environments for model training, bypassing the need to collect a large number of real images of the outdoors. We validate our approach in three main ways. First, we demonstrate that our models achieve classification accuracies upwards of 95% on our synthetic dataset. Next, we utilize our classification models in the control system of a simulated robot to demonstrate feasibility. Finally, we evaluate our models on real-world trail data and demonstrate the potential of virtual-to-real-world transfer learning. Comment: IROS 201
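    The abstract mentions wiring the trail-direction classifier into a simulated robot's control system. A minimal sketch of that coupling is below; the three class labels, the sign convention (steer back toward the trail center), and the turn rate are assumptions, not the paper's actual controller.

```python
# Illustrative mapping from a three-way trail-direction classification to a
# steering command for the simulated robot. Labels, sign convention, and the
# turn-rate constant are assumptions for the sake of a runnable example.

def steering_command(class_probs: dict[str, float], turn_rate: float = 0.3) -> float:
    """Return an angular velocity from the classifier's class probabilities.

    Assumed convention: "left" means the trail appears left of image center,
    so the robot should turn left (positive angular velocity) to recenter.
    """
    label = max(class_probs, key=class_probs.get)  # argmax over classes
    return {"left": +turn_rate, "center": 0.0, "right": -turn_rate}[label]

print(steering_command({"left": 0.1, "center": 0.7, "right": 0.2}))  # 0.0
```

    In a real control loop this command would be recomputed every frame from the deep model's softmax output; here a plain dict stands in for that output.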

    Extrinsic Calibration of a Camera-Arm System Through Rotation Identification

    Determining extrinsic calibration parameters is a necessity in any robotic system composed of actuators and cameras. Once a system is outside the lab environment, parameters must be determined without relying on outside artifacts such as calibration targets. We propose a method that relies on structured motion of an observed arm to recover extrinsic calibration parameters. Our method combines known arm kinematics with observations of conics in the image plane to calculate maximum-likelihood estimates for calibration extrinsics. This method is validated in simulation and tested against a real-world model, yielding results consistent with ruler-based estimates. Our method shows promise for estimating the pose of a camera relative to an articulated arm's end effector without requiring tedious measurements or external artifacts.
    Index Terms: robotics, hand-eye problem, self-calibration, structure from motion
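    The abstract's key observation step is that circular end-effector motions project to conics in the image. The sketch below shows only the generic conic-fitting sub-step (direct linear least squares via SVD); the paper's maximum-likelihood extrinsic recovery is not reproduced, and the synthetic data is an assumption for illustration.

```python
# Sketch of fitting an image-plane conic ax^2 + bxy + cy^2 + dx + ey + f = 0
# to observed 2D points, as a building block for conic-based calibration.
# This is a standard algebraic fit, not the authors' full ML pipeline.
import numpy as np

def fit_conic(pts: np.ndarray) -> np.ndarray:
    """Fit conic coefficients (a, b, c, d, e, f), up to scale, to Nx2 points."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(A)
    return vt[-1]  # right-singular vector of the smallest singular value

# Synthetic check: points on the unit circle x^2 + y^2 - 1 = 0.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)])
coeffs = fit_conic(pts)
coeffs /= coeffs[0]  # normalize so a = 1
print(np.round(coeffs, 6))  # ≈ [1, 0, 1, 0, 0, -1]
```

    With noisy image observations the same least-squares fit still applies; the recovered conic parameters then feed the pose estimation stage described in the paper.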

    Non-Invasive BCI through EEG

    Thesis advisor: Robert Signorile
    It has long been known that as neurons fire within the brain they produce measurable electrical activity. Electroencephalography (EEG) is the measurement and recording of these electrical signals using sensors arrayed across the scalp. Though there is copious research in using EEG technology in the fields of neuroscience and cognitive psychology, it is only recently that the possibility of utilizing EEG measurements as inputs in the control of computers has emerged. The idea of Brain-Computer Interfaces (BCIs), which allow the control of devices using brain signals, has evolved from the realm of science fiction to simple devices that currently exist. BCIs naturally lend themselves to many extremely useful applications, including prosthetic devices, restoring or aiding communication and hearing, military applications, video gaming and virtual reality, and robotic control, and have the possibility of significantly improving the quality of life of many disabled individuals. However, current BCIs suffer from many problems, including inaccuracies; delays between thought, detection, and action; exorbitant costs; and invasive surgeries. The purpose of this research is to examine the Emotiv EPOC© System as a cost-effective gateway to non-invasive portable EEG measurements and utilize it to build a thought-based BCI to control the Parallax Scribbler® robot. This research furthers the analysis of the current pros and cons of EEG technology as it pertains to BCIs and offers a glimpse of the future potential capabilities of BCI systems.
    Thesis (BA) — Boston College, 2010. Submitted to: Boston College, College of Arts and Sciences. Discipline: Computer Science Honors Program. Discipline: Computer Science

    TOBY: A Tool for Exploring Data in Academic Survey Papers

    This paper describes TOBY, a visualization tool that helps a user explore the contents of an academic survey paper. The visualization consists of four components: a hierarchical view of taxonomic data in the survey, a document similarity view in the space of taxonomic classes, a network view of citations, and a new paper recommendation tool. In this paper, we discuss these features in the context of three separate deployments of the tool.

    Training augmentation using additive sensory noise in a lunar rover navigation task

    Background: The uncertain environments of future space missions mean that astronauts will need to acquire new skills rapidly; thus, a non-invasive method to enhance learning of complex tasks is desirable. Stochastic resonance (SR) is a phenomenon where adding noise improves the throughput of a weak signal. SR has been shown to improve perception and cognitive performance in certain individuals. However, the learning of operational tasks and the behavioral health effects of repeated noise exposure aimed to elicit SR are unknown.
    Objective: We evaluated the long-term impacts and acceptability of repeated auditory white noise (AWN) and/or noisy galvanic vestibular stimulation (nGVS) on operational learning and behavioral health.
    Methods: Subjects (n = 24) participated in a longitudinal experiment to assess learning and behavioral health. Subjects were assigned to one of four treatments: sham, AWN (55 dB SPL), nGVS (0.5 mA), and their combination, creating a multi-modal SR (MMSR) condition. To assess the effects of additive noise on learning, these treatments were administered continuously during a lunar rover simulation in virtual reality. To assess behavioral health, subjects completed daily subjective questionnaires about their mood, sleep, stress, and perceived acceptance of the noise stimulation.
    Results: We found that subjects learned the lunar rover task over time, as shown by significantly lower power required for the rover to complete traverses (p < 0.005) and increased object identification accuracy in the environment (p = 0.05), but learning was not influenced by additive SR noise (p = 0.58). We found no influence of noise on mood or stress following stimulation (p > 0.09). We found marginally significant longitudinal effects of noise on behavioral health (p = 0.06), as measured by strain and sleep. We found slight differences in stimulation acceptability between treatment groups; notably, nGVS was found to be more distracting than sham (p = 0.006).
    Conclusion: Our results suggest that repeatedly administering sensory noise does not improve long-term operational learning performance or affect behavioral health. We also find that repetitive noise administration is acceptable in this context. While additive noise does not improve performance in this paradigm, if it were used in other contexts, it appears acceptable without negative longitudinal effects.